Federated learning achieves joint training of deep models by connecting decentralized data sources, which can significantly mitigate the risk of privacy leakage. However, in the more general case where the label distributions differ across clients, known as ``label distribution skew'', directly applying conventional federated learning without accounting for this skew significantly hurts the performance of the global model. To this end, we propose a novel federated learning method, named FedMGD, to alleviate the performance degradation caused by label distribution skew. It introduces a global Generative Adversarial Network to model the global data distribution without access to local datasets, so the global model can be trained using global distribution information without privacy leakage. The experimental results demonstrate that our proposed method significantly outperforms the state of the art on several public benchmarks. Code is available at \url{https://github.com/Sheng-T/FedMGD}.
Recently, adversarial machine learning attacks have posed serious security threats against practical audio signal classification systems, including speech recognition, speaker recognition, and music copyright detection. Previous studies have mainly focused on ensuring the effectiveness of attacking an audio signal classifier by generating small, noise-like perturbations on the original signal. It remains unclear whether an attacker is able to create audio signal perturbations that, in addition to being effective attacks, are also well perceived by humans. This is particularly important for music signals, as they are carefully crafted with human-enjoyable audio characteristics. In this work, we formulate adversarial attacks against music signals as a new perception-aware attack framework that incorporates human studies into adversarial attack design. Specifically, we conducted a human study to quantify human perception of changes to music signals. We invited human participants to rate pairs of original and perturbed music signals, and reverse-engineered the human perception process via regression analysis to predict the human-perceived deviation of a given perturbed signal. The perception-aware attack is then formulated as an optimization problem that finds an optimal perturbation signal to minimize the perceptual deviation predicted by the regressed human perception model. We use the perception-aware framework to design a realistic adversarial music attack against YouTube's copyright detector. Experiments show that the perception-aware attacks produce adversarial music of significantly better perceptual quality than prior work.
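The regression step described above can be sketched in a few lines. The following is a hypothetical stand-in, since the abstract does not specify the paper's actual features or regressor; all names and the ridge formulation are assumptions for illustration:

```python
import numpy as np

def fit_perception_model(features, ratings, ridge=1e-3):
    """Ridge least-squares fit mapping perturbation features to human
    deviation ratings.  Hypothetical stand-in for the paper's regression
    analysis; the actual feature set and model are not given here."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias term
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ ratings)

def predict_deviation(w, features):
    """Predicted human-perceived deviation; such a predictor could then
    serve as the objective of a perception-aware attack optimization."""
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ w
```

Once fitted, `predict_deviation` is differentiable in the perturbation features, which is what makes it usable inside an optimization loop.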
Federated learning (FL) provides an efficient decentralized machine learning framework in which the training data remains distributed at remote clients in a network. Although FL enables a privacy-preserving mobile edge computing framework using IoT devices, recent studies have shown that this approach is susceptible to poisoning attacks from the side of remote clients. To address poisoning attacks on FL, we provide a two-phase defense algorithm called Local Malicious Factor (LoMar). In phase I, LoMar scores model updates from each remote client by measuring their relative distribution over their neighbors using a kernel density estimation method. In phase II, an optimal threshold is approximated to distinguish malicious updates from clean ones from a statistical perspective. Comprehensive experiments on four real-world datasets have been conducted, and the experimental results show that our defense strategy can effectively protect the FL system. Specifically, the defense performance on the Amazon dataset under a label-flipping attack indicates that, compared with FG+Krum, LoMar increases the target label testing accuracy from 96.0% to 98.8%, and the overall averaged testing accuracy from 90.1% to 97.0%.
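Phase I's density scoring admits a compact sketch. Below is a minimal NumPy illustration assuming flattened update vectors and a Gaussian kernel; the paper's exact estimator, neighborhood definition, and threshold rule may well differ:

```python
import numpy as np

def malicious_factor(updates, k=3, bandwidth=1.0):
    """Score each client's model update by its Gaussian-kernel density
    relative to its k nearest neighbours; low scores suggest outlying
    (potentially malicious) updates.  Rough sketch of a LoMar-style
    phase I, not the paper's exact estimator."""
    updates = np.asarray(updates, dtype=float)
    n = len(updates)
    # pairwise Euclidean distances between flattened updates
    d = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1)
    scores = np.empty(n)
    for i in range(n):
        nn = np.argsort(d[i])[1:k + 1]  # k nearest neighbours, excluding self
        scores[i] = np.mean(np.exp(-(d[i, nn] ** 2) / (2 * bandwidth ** 2)))
    return scores

def flag_malicious(scores, threshold=0.5):
    """Phase II sketch: flag updates whose density score falls below a
    threshold (the paper approximates an optimal threshold statistically)."""
    return scores < threshold
```

A benign cluster of updates yields high mutual density, while a poisoned update far from the cluster scores near zero and is flagged.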
Compared to traditional machine learning (ML), federated learning (FL) is regarded as an appealing framework for addressing the data privacy issues of mobile devices. Using edge servers (ESs) as intermediaries to perform model aggregation in proximity can reduce the transmission overhead and enables great potential for low-latency FL, so the hierarchical architecture of FL (HFL) has been attracting increasing attention. Designing a proper client selection policy can significantly improve training performance and has been widely studied in FL. However, to the best of our knowledge, no studies have focused on HFL. In addition, client selection for HFL faces more challenges than conventional FL, e.g., the time-varying connections of client-ES pairs and the limited budget of the network operator (NO). In this paper, we investigate a client selection problem for HFL, where the NO learns the number of successfully participating clients to improve training performance (i.e., select as many clients as possible in each round) under the limited budget of each ES. An online policy, called Context-aware Online Client Selection (COCS), is developed based on contextual combinatorial multi-armed bandits (CC-MAB). COCS observes the side information (context) of local computation and client-ES pair transmission, and makes client selection decisions to maximize the NO's utility given a limited budget. Theoretically, COCS achieves a sublinear regret compared to an Oracle policy on both strongly convex and non-convex HFL. Simulation results also support the efficiency of the proposed COCS policy on real-world datasets.
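As a rough illustration of the bandit machinery behind such policies, a budgeted UCB client selector (ignoring the contexts and heterogeneous costs that COCS additionally handles) might look like the following; the class and its parameters are illustrative, not the paper's algorithm:

```python
import numpy as np

class BudgetedUCBSelector:
    """Toy combinatorial UCB client selector: each round, choose the m
    clients with the highest upper confidence bound on expected reward
    (e.g., successful participation).  A drastic simplification of a
    CC-MAB policy like COCS, which also uses side information (contexts)."""

    def __init__(self, n_clients, budget):
        self.m = budget
        self.counts = np.zeros(n_clients)   # times each client was chosen
        self.means = np.zeros(n_clients)    # running mean reward per client
        self.t = 0                          # round counter

    def select(self):
        self.t += 1
        bonus = np.sqrt(2.0 * np.log(self.t) / np.maximum(self.counts, 1))
        # unexplored clients get infinite UCB so every arm is tried once
        ucb = np.where(self.counts == 0, np.inf, self.means + bonus)
        return np.argsort(-ucb)[:self.m]

    def update(self, chosen, rewards):
        for i, r in zip(chosen, rewards):
            self.counts[i] += 1
            self.means[i] += (r - self.means[i]) / self.counts[i]
```

Over many rounds the selector concentrates its budget on the clients with the highest empirical success rates while still occasionally exploring the rest.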
Modern biomedical studies often collect multi-view data, that is, multiple types of data measured on the same set of objects. A popular model in high-dimensional multi-view data analysis is to decompose each view's data matrix into a low-rank common-source matrix generated by latent factors shared across all data views, a low-rank distinctive-source matrix corresponding to each view, and an additive noise matrix. We propose a novel decomposition method for this model, called decomposition-based generalized canonical correlation analysis (D-GCCA). In contrast to the Euclidean dot product space used by most existing methods, D-GCCA rigorously defines the decomposition on the L2 space of random variables, and is thereby able to provide estimation consistency for the low-rank matrix recovery. Moreover, to well calibrate the common latent factors, we impose a desirable orthogonality constraint on the distinctive latent factors. Existing methods, however, inadequately consider such orthogonality and may thus suffer from a substantial loss of undetected common-source variation. Our D-GCCA takes one step further than GCCA by separating the common and distinctive components among canonical variables, while enjoying an appealing interpretation from the perspective of principal component analysis. Furthermore, we propose using the variable-level proportion of signal variance explained by common or distinctive latent factors to select the most influenced variables. Consistent estimators of our D-GCCA method are established with good finite-sample numerical performance, and have closed-form expressions leading to efficient computation, especially for large-scale data. The superiority of D-GCCA over state-of-the-art methods is also corroborated in simulations and real-world data examples.
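D-GCCA itself is involved; as background, the classical two-view CCA that GCCA generalizes can be computed from the SVD of whitened data matrices. This is a standard construction, not the paper's method:

```python
import numpy as np

def cca_correlations(X, Y):
    """Sample canonical correlations of two data views (rows = objects,
    columns = variables), via SVD whitening.  Classical two-view CCA;
    GCCA-style methods extend this to more than two views, and D-GCCA
    adds a common/distinctive decomposition on top."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # orthonormal bases of each view's column space
    Ux = np.linalg.svd(X, full_matrices=False)[0]
    Uy = np.linalg.svd(Y, full_matrices=False)[0]
    # singular values of Ux^T Uy are the canonical correlations
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)
```

When the two views share a latent factor, the leading canonical correlation is close to one while the remaining correlations stay near the noise level.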
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly added relations that do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
In this paper, we investigate the joint device activity and data detection in massive machine-type communications (mMTC) with a one-phase non-coherent scheme, where data bits are embedded in the pilot sequences and the base station simultaneously detects active devices and their embedded data bits without explicit channel estimation. Due to the correlated sparsity pattern introduced by the non-coherent transmission scheme, the traditional approximate message passing (AMP) algorithm cannot achieve satisfactory performance. Therefore, we propose a deep learning (DL) modified AMP network (DL-mAMPnet) that enhances the detection performance by effectively exploiting the pilot activity correlation. The DL-mAMPnet is constructed by unfolding the AMP algorithm into a feedforward neural network, which combines the principled mathematical model of the AMP algorithm with the powerful learning capability of neural networks, thereby benefiting from the advantages of both techniques. Trainable parameters are introduced in the DL-mAMPnet to approximate the correlated sparsity pattern and the large-scale fading coefficient. Moreover, a refinement module is designed to further advance the performance by utilizing the spatial feature caused by the correlated sparsity pattern. Simulation results demonstrate that the proposed DL-mAMPnet can significantly outperform traditional algorithms in terms of the symbol error rate performance.
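For reference, the classical AMP iteration that such networks unfold can be sketched as follows, with fixed thresholds; an unfolded network like DL-mAMPnet would instead learn such parameters from data. This is the textbook algorithm, not the paper's modified version:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the denoiser used in AMP/LASSO."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(y, A, iters=50, theta=1.5):
    """Classical AMP for sparse recovery y = A x (+ noise), assuming A has
    i.i.d. ~N(0, 1/m) entries.  `theta` scales the per-iteration threshold;
    an unfolded network would make such quantities trainable."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        # threshold set from the residual's per-coordinate energy
        tau = theta * np.linalg.norm(z) / np.sqrt(m)
        x = soft_threshold(x + A.T @ z, tau)
        # Onsager correction term: distinguishes AMP from plain
        # iterative soft-thresholding and keeps iterates Gaussian-like
        z = y - A @ x + z * (np.count_nonzero(x) / m)
    return x
```

In a noiseless, sufficiently sparse regime the iteration recovers the signal accurately within a few tens of iterations.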
Implicit regularization is an important way to interpret neural networks. Recent theory starts to explain implicit regularization with the model of deep matrix factorization (DMF) and analyzes the trajectory of discrete gradient dynamics in the optimization process. These discrete gradient dynamics use step sizes that are relatively small but not infinitesimal, and thus fit well with the practical implementation of neural networks. Currently, discrete gradient dynamics analysis has been successfully applied to shallow networks but encounters the difficulty of complex computation for deep networks. In this work, we introduce another discrete gradient dynamics approach to explain implicit regularization, i.e., landscape analysis. It mainly focuses on regions where gradients vanish, such as saddle points and local minima. We theoretically establish the connection between saddle point escaping (SPE) stages and the matrix rank in DMF. We prove that, for a rank-R matrix reconstruction, DMF will converge to a second-order critical point after R stages of SPE. This conclusion is further experimentally verified on a low-rank matrix reconstruction problem. This work provides a new theory for analyzing implicit regularization in deep learning.
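A toy experiment makes the implicit-regularization setting concrete: gradient descent on an unconstrained deep factorization of a partially observed matrix, started from a small initialization, tends toward a low-rank product. This is only an illustration under simplified assumptions, not the paper's exact analysis or setting:

```python
import numpy as np
from functools import reduce

def _chain(mats, n):
    """Left-to-right product of a (possibly empty) list of n x n matrices."""
    return reduce(np.matmul, mats, np.eye(n))

def dmf_complete(M, mask, depth=3, lr=0.2, steps=10000, init_scale=0.05, seed=0):
    """Toy matrix completion with deep matrix factorization: full-batch
    gradient descent on 0.5 * ||mask * (W_1 ... W_d - M)||_F^2 over
    unconstrained square factors.  With a small initialization the learned
    product is biased toward low rank -- the implicit-regularization
    effect studied in the DMF literature."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    Ws = [rng.normal(0.0, init_scale, (n, n)) for _ in range(depth)]
    for _ in range(steps):
        E = mask * (_chain(Ws, n) - M)          # masked residual
        # dL/dW_i = (W_1...W_{i-1})^T E (W_{i+1}...W_d)^T
        grads = [_chain(Ws[:i], n).T @ E @ _chain(Ws[i + 1:], n).T
                 for i in range(depth)]
        for W, g in zip(Ws, grads):
            W -= lr * g
    return _chain(Ws, n)
```

On a rank-one target observed at 60% of its entries, the fitted product matches the observed entries while staying close to rank one, even though no rank constraint is imposed anywhere.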
Our situated environment is full of uncertainty and highly dynamic, which hinders the widespread adoption of machine-led Intelligent Decision-Making (IDM) in real-world scenarios. This means IDM should have the capability of continuously learning new skills and efficiently generalizing across wider applications. IDM benefits from any new approaches and theoretical breakthroughs that exhibit Artificial General Intelligence (AGI) breaking the barriers between tasks and applications. Recent research has thoroughly examined the Transformer neural architecture as a backbone foundation model and its generalization to various tasks, including computer vision, natural language processing, and reinforcement learning. We therefore argue that a foundation decision model (FDM) can be established by formulating various decision-making tasks as a sequence decoding task using the Transformer architecture; this would be a promising solution to advance the applications of IDM in more complex real-world tasks. In this paper, we elaborate on how a foundation decision model improves the efficiency and generalization of IDM. We also discuss potential applications of an FDM in multi-agent game AI, production scheduling, and robotics tasks. Finally, through a case study, we demonstrate our realization of the FDM, DigitalBrain (DB1) with 1.2 billion parameters, which achieves human-level performance on 453 tasks, including text generation, image captioning, video game playing, robotic control, and traveling salesman problems. As a foundation decision model, DB1 would be a baby step towards more autonomous and efficient real-world IDM applications.
With the ever-growing model size and the limited availability of labeled training data, transfer learning has become an increasingly popular approach in many science and engineering domains. For classification problems, this work delves into the mystery of transfer learning through an intriguing phenomenon termed neural collapse (NC), where the last-layer features and classifiers of learned deep networks satisfy: (i) the within-class variability of the features collapses to zero, and (ii) the between-class feature means are maximally and equally separated. Through the lens of NC, our findings for transfer learning are the following: (i) when pre-training models, preventing within-class variability collapse (to a certain extent) better preserves the intrinsic structures of the input data, leading to better model transferability; (ii) when fine-tuning models on downstream tasks, obtaining features with more NC on downstream data results in better test accuracy on the given task. The above results not only demystify many widely used heuristics in model pre-training (e.g., data augmentation, projection head, self-supervised learning), but also lead to a more efficient and principled fine-tuning method on downstream tasks, which we demonstrate through extensive experimental results.
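The within-class variability collapse in (i) is commonly quantified in the neural-collapse literature by an NC1-style metric; a minimal version (the paper may use a variant) is:

```python
import numpy as np

def nc_within_class_variability(features, labels):
    """NC1-style metric: trace(Sigma_W @ pinv(Sigma_B)) / C, where
    Sigma_W and Sigma_B are the within- and between-class covariance
    matrices of last-layer features and C is the number of classes.
    Values near zero indicate within-class variability collapse.
    A common formulation from the neural-collapse literature."""
    classes = np.unique(labels)
    mu_g = features.mean(axis=0)                       # global feature mean
    d = features.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Fc = features[labels == c]
        mu_c = Fc.mean(axis=0)
        Sw += (Fc - mu_c).T @ (Fc - mu_c) / len(features)
        Sb += np.outer(mu_c - mu_g, mu_c - mu_g) * len(Fc) / len(features)
    return np.trace(Sw @ np.linalg.pinv(Sb)) / len(classes)
```

Fully collapsed features (every sample sitting exactly at its class mean) score zero, while adding within-class noise raises the metric.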